Denoising Autoencoders for Overgeneralization in Neural Networks
Authors
Abstract
Despite the recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs belonging to categories unknown during training ...
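The truncated abstract does not spell out the method, but the problem it describes is often handled by treating reconstruction error as a novelty signal. The PyTorch sketch below illustrates that general idea only; the architecture, noise level, and 95th-percentile threshold are assumptions, not the paper's implementation. A denoising autoencoder is trained on the known classes, and inputs with unusually high reconstruction error are flagged as possibly belonging to unknown categories.

    # Illustrative sketch only (not the paper's method): threshold the reconstruction
    # error of a denoising autoencoder trained on in-distribution data.
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, dim_in=784, dim_hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
            self.decoder = nn.Linear(dim_hidden, dim_in)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train_step(model, optimizer, x, noise_std=0.3):
        # Corrupt the input, then learn to reconstruct the clean version.
        x_noisy = x + noise_std * torch.randn_like(x)
        loss = nn.functional.mse_loss(model(x_noisy), x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def novelty_score(model, x):
        # High reconstruction error suggests the input lies off the training manifold.
        with torch.no_grad():
            return ((model(x) - x) ** 2).mean(dim=1)

    model = DenoisingAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_known = torch.rand(256, 784)                 # stand-in for training data
    for _ in range(100):
        train_step(model, opt, x_known)
    threshold = novelty_score(model, x_known).quantile(0.95)   # assumed cutoff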
Similar resources

Transportation analysis of denoising autoencoders: a novel method for analyzing deep neural networks
The feature map obtained from the denoising autoencoder (DAE) is investigated by determining transportation dynamics of the DAE, which is a cornerstone for deep learning. Despite the rapid development in its application, deep neural networks remain analytically unexplained, because the feature maps are nested and parameters are not faithful. In this paper, we address the problem of the formulat...
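As a numerical aside (not taken from the paper), the transportation view can be seen in a one-dimensional Gaussian example where the optimal denoiser has a closed form: its displacement r(x) - x approximately equals sigma^2 times the score d/dx log p(x), so the denoiser transports points toward regions of higher data density. The values of mu, s, and sigma below are arbitrary.

    # Numerical sketch: for x ~ N(mu, s^2) corrupted with Gaussian noise of variance
    # sigma^2, the optimal denoiser r(x) = E[x | x_noisy] is available in closed form,
    # and its displacement approximates sigma^2 * d/dx log p(x) for small sigma.
    import numpy as np

    mu, s, sigma = 0.0, 1.0, 0.1                 # assumed data mean/std and corruption std
    x = np.linspace(-3.0, 3.0, 7)

    r = (s**2 * x + sigma**2 * mu) / (s**2 + sigma**2)   # optimal Gaussian denoiser
    displacement = r - x
    score = (mu - x) / s**2                              # d/dx log N(x; mu, s^2)

    print(np.allclose(displacement, sigma**2 * score, atol=1e-2))   # True for small sigma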
Multimodal Stacked Denoising Autoencoders
We propose a Multimodal Stacked Denoising Autoencoder for learning a joint model of data that consists of multiple modalities. The model is used to extract a joint representation that fuses modalities together. We have found that this representation is useful for classification tasks. Our model is made up of layers of denoising autoencoders which are trained locally to denoise corrupted version...
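A minimal sketch of the general architecture described above, assuming two vector-valued modalities and a single shared layer; layer sizes, noise level, and loss weighting are illustrative, not the authors' exact configuration. Each corrupted modality is encoded separately, the codes are fused into a joint representation, and both clean modalities are reconstructed from it.

    # Sketch of a bimodal denoising autoencoder with a shared joint layer.
    import torch
    import torch.nn as nn

    class MultimodalDAE(nn.Module):
        def __init__(self, dim_a=512, dim_b=128, dim_hidden=64, dim_joint=32):
            super().__init__()
            self.enc_a = nn.Sequential(nn.Linear(dim_a, dim_hidden), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Linear(dim_b, dim_hidden), nn.ReLU())
            self.joint = nn.Sequential(nn.Linear(2 * dim_hidden, dim_joint), nn.ReLU())
            self.dec_a = nn.Linear(dim_joint, dim_a)
            self.dec_b = nn.Linear(dim_joint, dim_b)

        def forward(self, a, b):
            # Fuse the modality-specific codes into one joint representation.
            z = self.joint(torch.cat([self.enc_a(a), self.enc_b(b)], dim=1))
            return self.dec_a(z), self.dec_b(z), z

    model = MultimodalDAE()
    a, b = torch.rand(16, 512), torch.rand(16, 128)          # two toy modalities
    a_noisy = a + 0.3 * torch.randn_like(a)
    b_noisy = b + 0.3 * torch.randn_like(b)
    rec_a, rec_b, joint_code = model(a_noisy, b_noisy)
    loss = nn.functional.mse_loss(rec_a, a) + nn.functional.mse_loss(rec_b, b)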
Denoising Adversarial Autoencoders
Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabelled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabelled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clea...
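A rough PyTorch sketch of the combination the abstract points to, not the authors' code: a denoising reconstruction loss is paired with an adversarial loss that pushes the latent codes toward a chosen prior, here a standard Gaussian. Dimensions, the corruption level, and the 0.1 loss weight are assumptions; in practice the discriminator and the encoder/decoder are updated alternately.

    # Denoising reconstruction plus adversarial regularization of the latent code.
    import torch
    import torch.nn as nn

    dim_x, dim_z = 784, 16
    encoder = nn.Sequential(nn.Linear(dim_x, 256), nn.ReLU(), nn.Linear(256, dim_z))
    decoder = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_x))
    discriminator = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, 1))
    bce = nn.BCEWithLogitsLoss()

    x = torch.rand(32, dim_x)
    x_noisy = x + 0.3 * torch.randn_like(x)      # corruption step (denoising part)
    z = encoder(x_noisy)

    # Reconstruction loss: recover the clean input from the corrupted one.
    rec_loss = nn.functional.mse_loss(decoder(z), x)

    # Adversarial losses: the discriminator separates prior samples from encoder
    # outputs, and the encoder is trained to fool it so q(z) matches the prior.
    z_prior = torch.randn(32, dim_z)
    d_loss = bce(discriminator(z_prior), torch.ones(32, 1)) + \
             bce(discriminator(z.detach()), torch.zeros(32, 1))
    g_loss = rec_loss + 0.1 * bce(discriminator(z), torch.ones(32, 1))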
Marginalized Stacked Denoising Autoencoders
Stacked Denoising Autoencoders (SDAs) [4] have been used successfully in many learning scenarios and application domains. In short, denoising autoencoders (DAs) train one-layer neural networks to reconstruct input data from partial random corruption. The denoisers are then stacked into deep learning architectures where the weights are fine-tuned with back-propagation. Alternatively, the outputs...
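For reference, one common formulation of the closed-form marginalized denoising transform behind mSDA fits in a few lines of NumPy. The sketch below is a simplified rendering of that idea, with regularization and stacking details reduced to a minimum; p denotes the feature-corruption (dropout) probability.

    # Closed-form marginalized denoising layer: the expected least-squares mapping
    # from randomly corrupted inputs back to clean inputs, computed without sampling.
    import numpy as np

    def mda_layer(X, p=0.5, reg=1e-5):
        """X: d x n data matrix (features x examples). Returns mapping W and hidden h."""
        d, n = X.shape
        Xb = np.vstack([X, np.ones((1, n))])            # append a constant bias feature
        q = np.full(d + 1, 1.0 - p); q[-1] = 1.0        # feature survival probabilities
        S = Xb @ Xb.T                                   # scatter matrix
        Q = S * np.outer(q, q)                          # E[x_corrupt x_corrupt^T]
        np.fill_diagonal(Q, q * np.diag(S))             # diagonal entries survive with prob q_i
        P = S[:d, :] * q                                # E[x_clean x_corrupt^T]
        W = P @ np.linalg.inv(Q + reg * np.eye(d + 1))  # expected denoising mapping
        return W, np.tanh(W @ Xb)                       # nonlinear hidden representation

    # Stacking sketch: feed each layer's hidden output into the next layer.
    X = np.random.rand(20, 100)
    W1, h1 = mda_layer(X)
    W2, h2 = mda_layer(h1)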
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2020
ISSN: 0162-8828, 2160-9292, 1939-3539
DOI: 10.1109/tpami.2019.2909876